Efficient Estimation of Word Representations in Vector Space
Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean
(Submitted on 16 Jan 2013 (v1), last revised 7 Sep 2013 (this version, v3))
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:1301.3781 [cs.CL]
(or arXiv:1301.3781v3 [cs.CL] for this version)
Title | Description | Date
---|---|---
Efficient Estimation of Word Representations in Vector Space | Original paper | 2013-01-16
TensorFlow学习笔记3:词向量 (TensorFlow Study Notes 3: Word Vectors) | Paper commentary | 2016-03-16
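The abstract's two headline claims, fast learning of word vectors and strong performance on syntactic/semantic word-similarity tests, can be sketched with gensim's `Word2Vec`, a standard reimplementation of the paper's architectures rather than the authors' original code. The toy corpus below is purely illustrative; reproducing the paper's numbers requires corpora on the order of the 1.6-billion-word data set it mentions.

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus: each sentence is a list of tokens.
# (Hypothetical data, far too small to learn meaningful vectors.)
corpus = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["man", "walks", "in", "the", "city"],
    ["woman", "walks", "in", "the", "city"],
]

# sg=1 selects the skip-gram architecture; sg=0 selects CBOW,
# the two model families proposed in the paper.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

# Word similarity: cosine similarity between the learned vectors.
print(model.wv.similarity("king", "queen"))

# Analogy via vector arithmetic (king - man + woman ≈ queen); on a real
# corpus this is the kind of syntactic/semantic test the abstract refers to.
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```

With a large training corpus, the `most_similar` query above is the vector-offset analogy evaluation the paper popularized; on this toy corpus the outputs are meaningless and serve only to show the API shape.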